The movement to hold AI accountable gains more steam


Algorithms play a growing role in our lives, even as their flaws are becoming more apparent: A Michigan man wrongly accused of fraud had to file for bankruptcy; automated screening tools disproportionately harm people of color who want to buy a home or rent an apartment; Black Facebook users were subjected to more abuse than white users. Other automated systems have improperly rated teachers, graded students, and flagged people with dark skin more often for cheating on tests. Now, efforts are underway to better understand how AI works and hold users accountable. New York's City Council last month adopted a law requiring audits of algorithms used by employers in hiring or promotion. The law, the first of its kind in the nation, requires employers to bring in outsiders to assess whether an algorithm exhibits bias based on sex, race, or ethnicity.



"We need to know how the many subjective decisions that go into building a model lead to the observed results, and why those decisions were thought justified at the time, just to have a chance at disentangling everything when something goes wrong," the paper reads. "Algorithmic impact assessments cannot solve all algorithmic harms, but they can put the field and regulators in better positions to avoid the harms in the first place and to act on them once we know more." A revamped version of the Algorithmic Accountability Act, first introduced in 2019, is now being discussed in Congress. According to a draft version of the legislation reviewed by WIRED, the bill would require businesses that use automated decision-making systems in areas such as health care, housing, employment, or education to carry out impact assessments and regularly report results to the FTC. A spokesperson for Senator Ron Wyden (D-Ore.), a cosponsor of the bill, says it calls on the FTC to create a public repository of automated decision-making systems and aims to establish an assessment process to enable future regulation by Congress or agencies like the FTC.


WIRED